location Repository: https://github.com/sarkadava/multiple_webcam_recording_for3Dtracking
location Jupyter notebook: https://github.com/sarkadava/multiple_webcam_recording_for3Dtracking/blob/main/webcam_scripts.ipynb
Please install the packages in requirements.txt
citation for this module: Kadavá, S., Snelders, J., Pouw, W. (2024). Recording from Multiple Webcams Synchronously while LSL Streaming [the day you viewed the site]. Retrieved from: https://envisionbox.org/multiple_webcam_record.html
In some of the modules on envisionBox we perform 3D tracking on multiple cameras that record synchronously. Before we started using those methods, we found that it is non-trivial to do actual recordings from multiple cameras in a synchronous way. We therefore share a script here that allows you to record from three webcams while also streaming information about the frame numbers to an LSL stream.
Through trial and error, we found that ffmpegcv was the most stable solution for recording three webcams simultaneously and synchronously. A good way to test whether your webcams are synchronous is to hold a stopwatch in front of all cameras, record the videos, and compare for each frame whether all videos show the same time.
Often you want to combine audio and other signals with video. LabStreamingLayer (LSL) is a very robust solution for this. LSL operates by collecting signal streams. In this script we create one such stream, in which we stream the frame number f(t) at some time t. When recording with LSL (using the LSL LabRecorder) you can write the stream to a file, and LSL will store the frame number alongside a common timestamp. You can then later align your frame numbers with the common timestamps, thereby ensuring a) that even when frames are dropped or collected at irregular intervals you can recover the actual time of each frame, and b) that you can align your other signals with very high precision to the frames (which may, for example, be the basis for your kinematic measurements). For example, we might also stream an accelerometer signal from a different PC on the network; this stream will also be collected by the LSL recorder and given a common timestamp. Since the acceleration signal and the frame numbers are timestamped with a common clock (note that different systems generally have different clocks), they can be synced and aligned.
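To make the alignment idea concrete, here is a minimal sketch (with made-up numbers, using numpy, which the recording script itself does not need) of how a recorded accelerometer stream can be resampled onto the frame timestamps once both share the common LSL clock:

```python
import numpy as np

# hypothetical data as a LabRecorder XDF file would store it:
# every pushed sample comes with a common-clock timestamp
frame_numbers = np.array([1, 2, 3, 4, 5])                        # pushed frame counter values
frame_times   = np.array([10.00, 10.03, 10.07, 10.10, 10.13])    # LSL timestamps of those pushes
accel_times   = np.array([10.00, 10.01, 10.02, 10.03, 10.04, 10.05])  # accelerometer timestamps
accel_values  = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])

# resample the accelerometer signal onto the frame timestamps by linear
# interpolation, so every video frame gets the accelerometer value at the
# moment it was captured (np.interp clamps outside the sampled range)
accel_at_frames = np.interp(frame_times, accel_times, accel_values)
print(accel_at_frames)  # → [0.  0.6 1.  1.  1. ]
```

The same logic applies to audio or any other stream: once everything carries timestamps from the common clock, alignment reduces to interpolation.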
We assume here that you are already working with LSL. At some later moment we might make an LSL tutorial if needed. If you just want to use the script for recording, that is fine too: you can either leave the script as is, or comment out the parts that refer to LSL (and just write the videos to disk).
We want to thank Pascal de Water at the Donders Institute for Brain, Cognition and Behaviour for first helping us with a demo script that does streaming and webcam recording. The current script is a heavily adapted version of that original script (using ffmpegcv instead of OpenCV for writing the videos).
import cv2 # for video processing functions
import datetime # for time registration
import time # for time registration
from pylsl import StreamInfo, StreamOutlet, local_clock # for LSL streaming
import threading # for creating threads to do multiple things at once
import ctypes # data formatting
import sys # general functions
import os # general functions
import ffmpegcv # important package for saving the videos quickly
import tqdm # progress bar
# presets
cams = [0, 1, 2] # change numbers if cameras not displayed
set_framerate = 30
# Define the resolution
width = 960
height = 540
# Recording main
print(sys.version)
# labstreaminglayer sets
#set sleep to 1ms accuracy
winmm = ctypes.WinDLL('winmm')
winmm.timeBeginPeriod(1)
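Note that winmm.timeBeginPeriod is Windows-only (it raises the system timer resolution to 1 ms, which makes the time.sleep(0.001) calls below actually sleep close to 1 ms). On Linux or macOS the WinDLL call will fail. If you want the script to degrade gracefully on other platforms, a small guard like this sketch would work:

```python
import ctypes
import sys

# the 1 ms timer-resolution trick only exists on Windows (winmm.dll);
# skip it elsewhere so the rest of the script can still run
high_res_timer = False
if sys.platform == "win32":
    ctypes.WinDLL("winmm").timeBeginPeriod(1)
    high_res_timer = True
else:
    print("timeBeginPeriod skipped: not running on Windows")
```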
# setup streaming capture device
def sendLSLFrames(camera_thread):
    stamp = local_clock()
    while camera_thread.is_alive():
        time.sleep(0.001)
        while local_clock() < stamp:
            pass
        stamp = local_clock() + (1.0 / freq)
        outlet.push_sample([frame_counter1])  # , local_clock())
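The loop above combines a coarse time.sleep(0.001) with a busy-wait on local_clock() to hit the target push interval more precisely than sleep alone can. The same pattern can be demonstrated with just the standard library (time.perf_counter standing in for local_clock; the function name and interval are ours, not part of the script):

```python
import time

def wait_until(deadline):
    """Sleep coarsely while far from the deadline, then busy-wait the last stretch."""
    while True:
        remaining = deadline - time.perf_counter()
        if remaining <= 0:
            return
        if remaining > 0.002:
            time.sleep(0.001)  # coarse sleep: cheap, but only ~1 ms accurate
        # otherwise spin (busy-wait) for the final fraction of a millisecond

# fire 5 ticks at a 10 ms interval and record how late each tick was
interval = 0.01
next_tick = time.perf_counter() + interval
errors = []
for _ in range(5):
    wait_until(next_tick)
    errors.append(time.perf_counter() - next_tick)  # lateness in seconds, always >= 0
    next_tick += interval
print([round(e * 1000, 3) for e in errors])  # lateness in ms
```

The busy-wait burns a little CPU, but it is what keeps the jitter well below what time.sleep alone delivers.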
# open the three cameras and return as variables
def open_cameras():
    # Open the cameras and set the resolution
    cap1 = cv2.VideoCapture(cams[0], cv2.CAP_DSHOW)
    cap1.set(cv2.CAP_PROP_FRAME_WIDTH, width)
    cap1.set(cv2.CAP_PROP_FRAME_HEIGHT, height)
    print("Camera 1 opened")
    cap2 = cv2.VideoCapture(cams[1], cv2.CAP_DSHOW)
    cap2.set(cv2.CAP_PROP_FRAME_WIDTH, width)
    cap2.set(cv2.CAP_PROP_FRAME_HEIGHT, height)
    print("Camera 2 opened")
    cap3 = cv2.VideoCapture(cams[2], cv2.CAP_DSHOW)
    cap3.set(cv2.CAP_PROP_FRAME_WIDTH, width)
    cap3.set(cv2.CAP_PROP_FRAME_HEIGHT, height)
    print("Camera 3 opened")
    return cap1, cap2, cap3
# MAIN CAMERA FUNCTION
def getWebcamData(cap1, cap2, cap3, video_writer):
    global frame_counter1
    global frame_counter2
    global frame_counter3
    prev = 0
    framecounter_fr = 0
    running_framerate = 0
    # main camera loop
    while True:
        # read frames from each webcam stream
        frames = read_frames(cap1, cap2, cap3)
        if len(frames) == 1:  # if read_frames returned an error code, break the main loop
            break
        frame1, frame2, frame3 = frames
        # throttle to the target frame rate to keep the cameras synchronized
        time_elapsed = time.time() - prev
        if time_elapsed > 1. / frame_rate:
            prev = time.time()
            # frame counters
            frame_counter1 += 1
            frame_counter2 += 1
            frame_counter3 += 1
            # estimate the frame rate after an initial ramp-up phase
            if frame_counter1 == 1000:
                framecounter_fr += 1
                timegetfor_fr = time.time()
            elif frame_counter1 >= 1001:
                framecounter_fr += 1
                timepassed_fr = time.time() - timegetfor_fr
                running_framerate = round(framecounter_fr / timepassed_fr, 2)
            # combine frames for display and the VideoWriter
            combined_frames, combined_frames_dis = combine_frames(frame1, frame2, frame3, running_framerate)
            # write the combined frames to the VideoWriter
            video_writer.write(combined_frames)
            # display the combined frames
            cv2.imshow('Webcam Streams', combined_frames_dis)
        # check for the 'q' key to exit
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    video_writer.release()
    # release the webcam resources
    cap1.release()
    cap2.release()
    cap3.release()
    # close the display window
    cv2.destroyAllWindows()
# read frames from 3 cameras, returns list of either frames or error code
def read_frames(cap1, cap2, cap3):
    ret1, frame1 = cap1.read()  # read frame from camera one
    if not ret1:
        print("Can't receive frame from camera one. Exiting...")
        return [-1]
    ret2, frame2 = cap2.read()  # read frame from camera two
    if not ret2:
        print("Can't receive frame from camera two. Exiting...")
        return [-1]
    ret3, frame3 = cap3.read()  # read frame from camera three
    if not ret3:
        print("Can't receive frame from camera three. Exiting...")
        return [-1]
    return [frame1, frame2, frame3]
# combines frames to instances for display and video writer, returns instances
def combine_frames(frame1, frame2, frame3, framerate):
    # rotate the frames
    frame1 = cv2.rotate(frame1, cv2.ROTATE_90_CLOCKWISE)
    frame2 = cv2.rotate(frame2, cv2.ROTATE_90_CLOCKWISE)
    frame3 = cv2.rotate(frame3, cv2.ROTATE_90_CLOCKWISE)
    # overlay the frame counters on screen
    cv2.putText(frame1, str(frame_counter1), (20, 40), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 0))
    cv2.putText(frame2, str(frame_counter2), (20, 40), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 0))
    cv2.putText(frame3, str(frame_counter3), (20, 40), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 0))
    # show the FPS after the initial ramp-up phase
    if frame_counter1 >= 1001:
        cv2.putText(frame1, 'fps: ' + str(framerate), (20, 60), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 0))
        cv2.putText(frame2, 'fps: ' + str(framerate), (20, 60), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 0))
        cv2.putText(frame3, 'fps: ' + str(framerate), (20, 60), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 0))
    # resize the frames for display
    frame1_dis = cv2.resize(frame1, (240, 426), interpolation=cv2.INTER_LINEAR)  # this resize gives the highest fps
    frame2_dis = cv2.resize(frame2, (240, 426), interpolation=cv2.INTER_LINEAR)
    frame3_dis = cv2.resize(frame3, (240, 426), interpolation=cv2.INTER_LINEAR)
    # combine the frames horizontally
    combined_frames = cv2.hconcat([frame1, frame2, frame3])
    combined_frames_dis = cv2.hconcat([frame1_dis, frame2_dis, frame3_dis])
    return combined_frames, combined_frames_dis
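To see what combine_frames produces dimension-wise: a captured frame has shape (height, width, channels) = (540, 960, 3), rotating 90 degrees swaps height and width, and horizontal concatenation of the three rotated frames triples the width. A quick sketch with numpy (np.rot90 and np.hstack standing in for cv2.rotate and cv2.hconcat, which behave the same shape-wise):

```python
import numpy as np

# a captured frame is height x width x channels = 540 x 960 x 3
frame = np.zeros((540, 960, 3), dtype=np.uint8)

# rotating 90 degrees clockwise swaps height and width, like cv2.ROTATE_90_CLOCKWISE
rotated = np.rot90(frame, k=-1)
print(rotated.shape)   # → (960, 540, 3)

# horizontally concatenating three rotated frames, like cv2.hconcat
combined = np.hstack([rotated, rotated, rotated])
print(combined.shape)  # → (960, 1620, 3)
```

So the video written to disk is 1620x960, while the display copies are resized to 240x426 each to keep the preview cheap.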
################ LABSTREAMLAYER INPUTS ################
freq = 500
frame_rate = 200.0  # when set to 60, the maximum fps we get is around 40; when set to 200, we reach 60
data_size = 20
stream_info = StreamInfo(name='MyWebcamFrameStream', type='frameNR', channel_count=1, channel_format='int32', nominal_srate=freq, source_id='MyWebcamFrameStream')
outlet = StreamOutlet(stream_info) # broadcast the stream
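On the receiving side (LabRecorder does this for you, but it can also be done in your own script) the stream can be picked up by name. This is a hypothetical minimal consumer sketch, assuming pylsl is installed and the recording script is running somewhere on the network:

```python
# hypothetical consumer sketch; only finds a stream if an outlet is actually running
found = False
try:
    from pylsl import StreamInlet, resolve_byprop

    # look for our frame-number stream by name, giving up after 2 seconds
    streams = resolve_byprop('name', 'MyWebcamFrameStream', timeout=2.0)
    if streams:
        inlet = StreamInlet(streams[0])
        sample, timestamp = inlet.pull_sample(timeout=1.0)
        print('frame number', sample, 'at LSL time', timestamp)
        found = True
    else:
        print('no stream found (is the recording script running?)')
except ImportError:
    print('pylsl not installed')
```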
################ Execute LSL threading ################
# initialize global frame counters
frame_counter1, frame_counter2, frame_counter3 = 1, 1, 1
# open the default webcam devices
print("Starting LSL webcam: Press Q to stop!")
cap1, cap2, cap3 = open_cameras()
# specify file location of output
pcn_id = input('Enter ID: ')
time_stamp = datetime.datetime.now().strftime('%Y-%m-%d')
file_name = pcn_id + '_' + time_stamp + '_output.avi'
vidloc = os.getcwd() + '\\data\\' + file_name # Specify output location
print('Data saved in: ' + vidloc)
# set up the VideoWriter
video_writer = ffmpegcv.VideoWriter(vidloc, 'rawvideo', set_framerate) # 'h264' possible, but lower quality
# initialize the LSL threads
camera_thread = threading.Thread(target=getWebcamData, args=(cap1, cap2, cap3, video_writer))
camera_thread.start()
sendLSLFrames(camera_thread)
# notify when program has concluded
print("Stop")
3.9.13 (main, Aug 25 2022, 23:51:50) [MSC v.1916 64 bit (AMD64)]
Starting LSL webcam: Press Q to stop!
Camera 1 opened
Camera 2 opened
Camera 3 opened
Enter ID: test
Data saved in: C:\Research_Projects\multiple_webcam_recording_for3Dtracking\data\test_2024-02-13_output.avi
Stop
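The rawvideo AVI files are large. For sharing or archiving (such as the compressed test_compr.mp4 displayed below) you can re-encode them afterwards. This is a sketch that shells out to the ffmpeg command-line tool (assumed to be on your PATH); the function name and file paths are our own, illustrative choices:

```python
import os
import subprocess

def compress_to_mp4(src, dst, crf=23):
    """Re-encode a rawvideo AVI to H.264 MP4 using the ffmpeg CLI (must be on PATH)."""
    # lower crf = higher quality and larger file; 23 is the x264 default
    subprocess.run(
        ['ffmpeg', '-y', '-i', src, '-c:v', 'libx264', '-crf', str(crf), dst],
        check=True,
    )

src = './data/test_2024-02-13_output.avi'  # hypothetical recording from the run above
if os.path.exists(src):
    compress_to_mp4(src, './data/test_compr.mp4')
```

Doing the compression offline, after the recording, is deliberate: encoding to H.264 during capture costs CPU and can lower the achieved frame rate.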
We recommend checking the timing of the frames. Here we recorded a stopwatch in view of the three webcams while they were recording. If you inspect some frames closely, you will see that the webcams show the same time, or at most 1 ms difference. Just play and pause the video at random frames while the stopwatch is in view.
from moviepy.editor import VideoFileClip
from IPython.display import display
clip = VideoFileClip("./data/test_compr.mp4")
clip.ipython_display(width=540, height=540, autoplay=True, loop=True)
Moviepy - Building video __temp__.mp4.
MoviePy - Writing audio in __temp__TEMP_MPY_wvf_snd.mp3
MoviePy - Done.
Moviepy - Writing video __temp__.mp4
Moviepy - Done !
Moviepy - video ready __temp__.mp4